
    On Randomized Memoryless Algorithms for the Weighted $k$-Server Problem

    The weighted $k$-server problem is a generalization of the $k$-server problem in which the cost of moving a server of weight $\beta_i$ through a distance $d$ is $\beta_i \cdot d$. The weighted server problem on uniform spaces models caching where caches have different write costs. We prove tight bounds on the performance of randomized memoryless algorithms for this problem on uniform metric spaces. We prove that there is an $\alpha_k$-competitive memoryless algorithm for this problem, where $\alpha_k = \alpha_{k-1}^2 + 3\alpha_{k-1} + 1$ and $\alpha_1 = 1$. On the other hand, we also prove that no randomized memoryless algorithm can have a competitive ratio better than $\alpha_k$. To prove the upper bound of $\alpha_k$, we develop a framework to bound from above the competitive ratio of any randomized memoryless algorithm for this problem. The key technical contribution is a method for working with potential functions defined implicitly as the solution of a linear system. The result is robust in the sense that a small change in the probabilities used by the algorithm results in a small change in the upper bound on the competitive ratio. The above result has two important implications. Firstly, this yields an $\alpha_k$-competitive memoryless algorithm for the weighted $k$-server problem on uniform spaces; this is the first competitive algorithm for $k>2$ which is memoryless. Secondly, this helps us prove that the Harmonic algorithm, which chooses probabilities in inverse proportion to weights, has a competitive ratio of $k\alpha_k$.
    Comment: Published at the 54th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2013).
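    To get a feel for how fast the ratio $\alpha_k$ grows, one can simply unroll the recurrence; the short Python sketch below (our own illustration, not code from the paper) does exactly that.

```python
def alpha(k: int) -> int:
    """Competitive ratio alpha_k, defined by the recurrence
    alpha_k = alpha_{k-1}^2 + 3*alpha_{k-1} + 1 with alpha_1 = 1."""
    a = 1
    for _ in range(k - 1):
        a = a * a + 3 * a + 1
    return a

# Since alpha_k ~ alpha_{k-1}^2, the ratio grows doubly exponentially in k:
print([alpha(k) for k in range(1, 5)])  # [1, 5, 41, 1805]
```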

    Approximating the Regular Graphic TSP in near linear time

    We present a randomized approximation algorithm for computing traveling salesperson tours in undirected regular graphs. Given an $n$-vertex, $k$-regular graph, the algorithm computes a tour of length at most $\left(1+\frac{7}{\ln k-O(1)}\right)n$, with high probability, in $O(nk \log k)$ time. This improves upon a recent result by Vishnoi (FOCS 2012) for the same problem, in terms of both the approximation factor and the running time. The key ingredient of our algorithm is a technique that uses edge-coloring algorithms to sample a cycle cover with $O(n/\log k)$ cycles with high probability, in near linear time. Additionally, we also give a deterministic $\frac{3}{2}+O\left(\frac{1}{\sqrt{k}}\right)$ factor approximation algorithm running in time $O(nk)$.
    Comment: 12 pages.
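    Dropping the unspecified $O(1)$ terms from both guarantees, a small numeric table shows how the two approximation factors compare as the degree $k$ grows; this is purely illustrative and not from the paper.

```python
import math

# Illustration only: the O(1) terms in both guarantees are ignored.
def randomized_factor(k: int) -> float:
    return 1 + 7 / math.log(k)          # tour length ~ (1 + 7/ln k) * n

def deterministic_factor(k: int) -> float:
    return 1.5 + 1 / math.sqrt(k)       # tour length ~ (3/2 + 1/sqrt(k)) * n

for k in (4, 64, 1024, 10**6, 10**7):
    print(k, round(randomized_factor(k), 3), round(deterministic_factor(k), 3))
# With the constants dropped, the randomized factor dips below 3/2 only once
# ln k > 14, i.e. k > e^14 ≈ 1.2e6, so the deterministic bound wins for small k.
```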

    Metrical Service Systems with Multiple Servers

    We study the problem of metrical service systems with multiple servers (MSSMS), which generalizes two well-known problems: the $k$-server problem and metrical service systems. The MSSMS problem is to service requests, each of which is an $l$-point subset of a metric space, using $k$ servers, with the objective of minimizing the total distance traveled by the servers. Feuerstein initiated a study of this problem by proving upper and lower bounds on the deterministic competitive ratio for uniform metric spaces. We improve Feuerstein's analysis of the upper bound and prove that his algorithm achieves a competitive ratio of $k\left(\binom{k+l}{l}-1\right)$. In the randomized online setting, for uniform metric spaces, we give an algorithm which achieves a competitive ratio of $\mathcal{O}(k^3\log l)$, beating the deterministic lower bound of $\binom{k+l}{l}-1$. We prove that any randomized algorithm for MSSMS on uniform metric spaces must be $\Omega(\log kl)$-competitive. We then prove an improved lower bound of $\binom{k+2l-1}{k}-\binom{k+l-1}{k}$ on the competitive ratio of any deterministic algorithm for $(k,l)$-MSSMS on general metric spaces. In the offline setting, we give a pseudo-approximation algorithm for $(k,l)$-MSSMS on general metric spaces, which achieves an approximation ratio of $l$ using $kl$ servers. We also prove a matching hardness result: a pseudo-approximation with fewer than $kl$ servers is unlikely, even for uniform metric spaces. For general metric spaces, we highlight the limitations of a few popular techniques that have been used in algorithm design for the $k$-server problem and metrical service systems.
    Comment: 18 pages; accepted for publication at COCOON 201
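    To see the scale of these deterministic bounds, the snippet below (our own illustration) tabulates the upper bound $k(\binom{k+l}{l}-1)$ and the general-metric lower bound $\binom{k+2l-1}{k}-\binom{k+l-1}{k}$ for small $k$ and $l$.

```python
from math import comb

def det_upper(k: int, l: int) -> int:
    # Feuerstein's algorithm under the improved analysis: k * (C(k+l, l) - 1)
    return k * (comb(k + l, l) - 1)

def det_lower(k: int, l: int) -> int:
    # Improved lower bound on general metrics: C(k+2l-1, k) - C(k+l-1, k)
    return comb(k + 2 * l - 1, k) - comb(k + l - 1, k)

for k, l in [(2, 2), (3, 2), (3, 3), (4, 4)]:
    print(f"k={k}, l={l}: upper = {det_upper(k, l)}, lower = {det_lower(k, l)}")
# k=2, l=2: upper = 2*(C(4,2)-1) = 10, lower = C(5,2)-C(3,2) = 7
```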

    The Randomized Competitive Ratio of Weighted k-Server Is at Least Exponential

    The weighted k-server problem is a natural generalization of the k-server problem in which the cost incurred in moving a server is the distance traveled times the weight of the server. Even after almost three decades since the seminal work of Fiat and Ricklin (1994), the competitive ratio of this problem remains poorly understood, even on the simplest class of metric spaces: the uniform metric spaces. In particular, in the case of randomized algorithms against the oblivious adversary, neither an upper bound better than the doubly exponential deterministic upper bound, nor a lower bound better than the logarithmic lower bound of unweighted k-server, is known. In this paper, we make significant progress towards understanding the randomized competitive ratio of weighted k-server on uniform metrics. We cut down the triply exponential gap between the upper and lower bounds to a singly exponential gap by proving that the competitive ratio is at least exponential in k, substantially improving on the previously known lower bound of about ln k.

    Prophet Inequality: Order selection beats random order

    In the prophet inequality problem, a gambler faces a sequence of items arriving online with values drawn independently from known distributions. On seeing an item, the gambler must choose whether to accept its value as her reward and quit the game, or reject it and continue. The gambler's aim is to maximize her expected reward relative to the expected maximum of the values of all items. Since the seminal work of Krengel and Sucheston (1977, 1978), a tight bound of 1/2 has been known for this competitive ratio in the setting where the items arrive in an adversarial order. However, the optimum ratio still remains unknown in the order selection setting, where the gambler selects the arrival order, as well as in prophet secretary, where the items arrive in a random order. Moreover, it is not even known whether a separation exists between the two settings. In this paper, we show that the power of order selection allows the gambler to guarantee a strictly better competitive ratio than if the items arrive randomly. For the order selection setting, we identify an instance for which Peng and Tang's (FOCS'22) state-of-the-art algorithm performs no better than their claimed competitive ratio of (approximately) 0.7251, thus illustrating the need for an improved approach. We therefore extend their design and provide a more general algorithm design framework which allows the use of a different time-dependent threshold function for each item, as opposed to the common threshold function employed by Peng and Tang's algorithm. We use this framework to show that Peng and Tang's ratio can be beaten, by designing a 0.7258-competitive algorithm. For the random order setting, we improve upon Correa, Saona and Ziliotto's (SODA'19) 0.732-hardness result to show a hardness of 0.7254 for general algorithms, thus establishing a separation between the order selection and random order settings.
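    For context on the baseline 1/2 from the adversarial-order setting: a single fixed threshold already achieves it, e.g. accepting the first value of at least half the expected maximum. The simulation below sketches this folklore threshold rule (a standard textbook policy, not the algorithm of this paper).

```python
import random

def run_gambler(samplers, threshold, trials=100_000):
    """Simulate the fixed-threshold policy: accept the first item whose value
    clears `threshold`. Returns (avg gambler reward, avg prophet reward)."""
    reward_sum = prophet_sum = 0.0
    for _ in range(trials):
        values = [draw() for draw in samplers]
        prophet_sum += max(values)
        reward_sum += next((v for v in values if v >= threshold), 0.0)
    return reward_sum / trials, prophet_sum / trials

# Toy instance with two independent items in a fixed (adversarial) order.
samplers = [lambda: random.uniform(0, 1), lambda: random.uniform(0, 2)]
_, emax = run_gambler(samplers, threshold=float("inf"))  # estimate E[max]
reward, emax = run_gambler(samplers, threshold=emax / 2)
print(f"ratio ≈ {reward / emax:.3f} (threshold E[max]/2 guarantees ≥ 0.5)")
```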

    Set Cover with Delay - Clairvoyance Is Not Required

    In most online problems with delay, clairvoyance (i.e., knowing the future delay of a request upon its arrival) is required for polylogarithmic competitiveness. In this paper, we show that this is not the case for set cover with delay (SCD): specifically, we present the first non-clairvoyant algorithm, which is $O(\log n \log m)$-competitive, where $n$ is the number of elements and $m$ is the number of sets. This matches the best known result for the classic online set cover (a special case of non-clairvoyant SCD). Moreover, clairvoyance does not allow for significant improvement: we present lower bounds of $\Omega(\sqrt{\log n})$ and $\Omega(\sqrt{\log m})$ for SCD, which apply for the clairvoyant case. In addition, the competitiveness of our algorithm does not depend on the number of requests. Such a guarantee, in terms of the size of the universe alone, was not previously known even for the clairvoyant case: the only previously known algorithm (due to Carrasco et al.) is clairvoyant, with competitiveness that grows with the number of requests. For the special case of vertex cover with delay, we show a simpler, deterministic algorithm which is 3-competitive (and also non-clairvoyant).
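    The objective being charged here is easy to state concretely: a solution pays for every set it buys plus, for every request, the delay it accrues until it is covered. The sketch below is a toy cost accounting under our own simplifying assumption of unit-rate delay (delay cost equals waiting time); the general model allows request-specific delay functions.

```python
# Toy cost model for set cover with delay (unit-rate delay assumed).
def scd_cost(requests, purchases, set_cost):
    """requests: list of (arrival_time, element);
    purchases: list of (purchase_time, frozenset of elements);
    set_cost: dict mapping each set to its price."""
    total = sum(set_cost[s] for _, s in purchases)          # buying cost
    for t_arr, elem in requests:
        cover_times = [t for t, s in purchases if elem in s and t >= t_arr]
        total += min(cover_times) - t_arr                   # waiting time
    return total

s1, s2 = frozenset({"a", "b"}), frozenset({"b", "c"})
reqs = [(0.0, "a"), (1.0, "c")]
buys = [(0.5, s1), (1.5, s2)]
print(scd_cost(reqs, buys, {s1: 2.0, s2: 3.0}))  # 2 + 3 + 0.5 + 0.5 = 6.0
```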

    Min-Cost Bipartite Perfect Matching with Delays

    In the min-cost bipartite perfect matching with delays (MBPMD) problem, requests arrive online at points of a finite metric space. Each request is either positive or negative and has to be matched to a request of opposite polarity. As opposed to traditional online matching problems, the algorithm does not have to serve requests as they arrive, and may choose to match them later at a cost. Our objective is to minimize the sum of the distances between matched pairs of requests (the connection cost) and the sum of the waiting times of the requests (the delay cost). This objective exhibits a natural tradeoff between minimizing the distances and the cost of waiting for better matches. This tradeoff appears in many real-life scenarios, notably ride-sharing platforms. MBPMD is related to its non-bipartite variant, min-cost perfect matching with delays (MPMD), in which each request can be matched to any other request. MPMD was introduced by Emek et al. (STOC'16), who showed an $O(\log^2 n+\log \Delta)$-competitive randomized algorithm on $n$-point metric spaces with aspect ratio $\Delta$. Our contribution is threefold. First, we present a new lower bound construction for MPMD and MBPMD. We get a lower bound of $\Omega(\sqrt{\log n/\log\log n})$ on the competitive ratio of any randomized algorithm for MBPMD. For MPMD, we improve the lower bound from $\Omega(\sqrt{\log n})$ (shown by Azar et al., SODA'17) to $\Omega(\log n/\log\log n)$, thus almost matching their upper bound of $O(\log n)$. Second, we adapt the algorithm of Emek et al. to the bipartite case, and provide a simplified analysis that improves the competitive ratio to $O(\log n)$. The key ingredient of the algorithm is an $O(h)$-competitive randomized algorithm for MBPMD on weighted trees of height $h$. Third, we provide an $O(h)$-competitive deterministic algorithm for MBPMD on weighted trees of height $h$. This algorithm is obtained by adapting the algorithm for MPMD by Azar et al. to the apparently more complicated bipartite setting.
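    The objective splits cleanly into connection cost plus delay cost, which the few lines below evaluate for a given matching on a toy one-dimensional metric (an illustration of the objective only, not of the online algorithms).

```python
# Evaluate the MBPMD objective for a given matching (illustration only).
def mbpmd_cost(requests, matching, dist):
    """requests: list of (arrival_time, point, polarity);
    matching: list of (i, j, match_time) pairing opposite-polarity requests."""
    total = 0.0
    for i, j, t in matching:
        t_i, p_i, pol_i = requests[i]
        t_j, p_j, pol_j = requests[j]
        assert pol_i != pol_j and t >= max(t_i, t_j)
        total += dist(p_i, p_j)           # connection cost of the pair
        total += (t - t_i) + (t - t_j)    # delay cost: both waiting times
    return total

line = lambda x, y: abs(x - y)            # toy 1-D metric
reqs = [(0.0, 0.0, "+"), (2.0, 5.0, "-")]
print(mbpmd_cost(reqs, [(0, 1, 2.0)], line))  # 5.0 + (2.0 + 0.0) = 7.0
```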

    Testing Graph Clusterability: Algorithms and Lower Bounds

    We consider the problem of testing graph cluster structure: given access to a graph $G = (V, E)$, can we quickly determine whether the graph can be partitioned into a few clusters with good inner conductance, or is far from any such graph? This is a generalization of the well-studied problem of testing graph expansion, where one wants to distinguish between the graph having good expansion (i.e., being a good single cluster) and the graph having a sparse cut (i.e., being a union of at least two clusters). A recent work of Czumaj, Peng, and Sohler (STOC'15) gave an ingenious sublinear time algorithm for testing $k$-clusterability in time $\tilde{O}(n^{1/2}\,\mathrm{poly}(k))$. Their algorithm implicitly embeds a random sample of vertices of the graph into Euclidean space, and then clusters the samples based on estimates of Euclidean distances between the points. This yields a very efficient testing algorithm, but it only works if the cluster structure is very strong: it is necessary to assume that the gap between the conductances of accepted and rejected graphs is at least logarithmic in the size of the graph $G$. In this paper we show how one can leverage more refined geometric information, namely angles as opposed to distances, to obtain a sublinear time tester that works even when the gap is a sufficiently large constant. Our tester is based on the singular value decomposition of a natural matrix derived from random walk transition probabilities from a small sample of seed nodes. We complement our algorithm with a matching lower bound on the query complexity of testing clusterability. Our lower bound is based on a novel property testing problem, which we analyze using Fourier analytic tools. As a byproduct of our techniques, we also achieve new lower bounds for the problem of approximating MAX-CUT value in sublinear time.
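    A minimal sketch of the geometric idea, under our own simplifying assumptions (a small graph held as a dense matrix, exact walk distributions rather than the paper's sublinear-query estimates): form the random-walk distribution vectors of a few seed nodes, project them via an SVD, and compare seeds by angle rather than by distance.

```python
import numpy as np

def walk_distributions(adj, seeds, steps):
    """One row per seed: its lazy random-walk distribution after `steps` steps.
    `adj` is a dense 0/1 adjacency matrix (toy setting only)."""
    n = adj.shape[0]
    P = 0.5 * (np.eye(n) + adj / adj.sum(axis=1, keepdims=True))
    return np.linalg.matrix_power(P, steps)[seeds]

def angle_similarities(dists, rank):
    """Project the distribution vectors onto the top singular directions and
    return pairwise cosine similarities -- the 'angles' signal."""
    _, _, Vt = np.linalg.svd(dists, full_matrices=False)
    proj = dists @ Vt[:rank].T
    proj /= np.linalg.norm(proj, axis=1, keepdims=True)
    return proj @ proj.T

# Toy graph: two dense blocks joined by a single edge.
A = np.zeros((10, 10))
A[:5, :5] = A[5:, 5:] = 1
np.fill_diagonal(A, 0)
A[4, 5] = A[5, 4] = 1
sims = angle_similarities(walk_distributions(A, [0, 2, 7, 9], steps=8), rank=2)
print(np.round(sims, 2))  # near 1 within a block, noticeably lower across
```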